Results 1 - 17 of 17
1.
Assist Technol ; 36(1): 60-63, 2024 01 02.
Article in English | MEDLINE | ID: mdl-37115821

ABSTRACT

Based on statistics from the WHO and the International Agency for the Prevention of Blindness, an estimated 43.3 million people were blind and 295 million had moderate to severe vision impairment globally as of 2020, figures expected to rise to 61 million and 474 million, respectively, by 2050. Blindness and low vision (BLV) impede many activities of daily living, as sight is beneficial to most functional tasks. Assistive technologies for persons with blindness and low vision (pBLV) consist of a wide range of aids that work in some way to enhance one's functioning and support independence. Although handheld and head-mounted approaches have been primary foci when building new platforms or devices to support function and mobility, this perspective reviews potential shortcomings of these form factors or embodiments and posits that a body-centered approach may overcome many of these limitations.


Subject(s)
Vision, Low , Visually Impaired Persons , Wearable Electronic Devices , Humans , Activities of Daily Living , Visual Acuity , Blindness
2.
IEEE J Transl Eng Health Med ; 11: 523-535, 2023.
Article in English | MEDLINE | ID: mdl-38059065

ABSTRACT

OBJECTIVE: People with blindness and low vision face substantial challenges when navigating both indoor and outdoor environments. While various solutions are available to facilitate travel to and from public transit hubs, there is a notable absence of solutions for navigating within transit hubs, often referred to as the "middle mile". Although research pilots have explored the middle mile journey, no solutions exist at scale, leaving a critical gap for commuters with disabilities. In this paper, we proposed a novel mobile application, Commute Booster, that offers full trip planning and real-time guidance inside the station. METHODS AND PROCEDURES: Our system consists of two key components: the general transit feed specification (GTFS) and optical character recognition (OCR). The GTFS dataset generates a comprehensive list of wayfinding signage within subway stations that users will encounter during their intended journey. The OCR functionality enables users to identify relevant navigation signs in their immediate surroundings. By seamlessly integrating these two components, Commute Booster provides real-time feedback to users regarding the presence or absence of relevant navigation signs within the field of view of their phone camera during their journey. RESULTS: As part of our technical validation process, we conducted tests at three subway stations in New York City. The sign detection achieved an impressive overall accuracy rate of 0.97. Additionally, the system exhibited a maximum detection range of 11 meters and supported an oblique angle of approximately 110 degrees for field of view detection. CONCLUSION: The Commute Booster mobile application relies on computer vision technology and does not require additional sensors or infrastructure. It holds tremendous promise in assisting individuals with blindness and low vision during their daily commutes. 
Clinical and Translational Impact Statement: Commute Booster translates the combination of OCR and GTFS into an assistive tool, which holds great promise for assisting people with blindness and low vision in their daily commute.


Subject(s)
Mobile Applications , Self-Help Devices , Vision, Low , Humans , Transportation , Blindness
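The core matching step described in the Commute Booster abstract (signage expected from the GTFS itinerary checked against OCR output from the phone camera) can be sketched as follows. This is a minimal illustration only: the function name, the fuzzy-match threshold, and the sample sign strings are assumptions, not details taken from the paper.

```python
import difflib

def match_signs(expected_signs, ocr_tokens, threshold=0.8):
    """Return the itinerary signs that appear among OCR'd text fragments.

    expected_signs: wayfinding sign texts derived from the GTFS itinerary
    ocr_tokens: text fragments read from one camera frame
    threshold: fuzzy-match cutoff (illustrative assumption)
    """
    found = set()
    for sign in expected_signs:
        for token in ocr_tokens:
            # Tolerate OCR noise (e.g., '0' read for 'O') via fuzzy matching
            ratio = difflib.SequenceMatcher(
                None, sign.lower(), token.lower()).ratio()
            if ratio >= threshold:
                found.add(sign)
                break
    return found

# Hypothetical signs the rider should follow for one journey
itinerary_signs = ["Uptown A C E", "Exit 34 St", "Downtown 1 2 3"]
# Noisy OCR output from a single camera frame
frame_text = ["UPT0WN A C E", "Elevator", "Exit 34 St"]
print(sorted(match_signs(itinerary_signs, frame_text)))
```

Feeding each frame's OCR output through such a matcher lets the app report, in real time, which expected signs are or are not in the camera's field of view.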
3.
PLOS Digit Health ; 2(6): e0000275, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37339135

ABSTRACT

Visual impairment represents a significant health and economic burden, affecting 596 million people globally. The incidence of visual impairment is expected to double by 2050 as the population ages. Independent navigation is challenging for persons with visual impairment, who often rely on non-visual sensory signals to find the optimal route. In this context, electronic travel aids are promising solutions that can be used for obstacle detection and/or route guidance. However, electronic travel aids have limitations, such as low uptake and limited training, that restrict their widespread use. Here, we present a virtual reality platform for testing, refining, and training with electronic travel aids. We demonstrate its viability with an electronic travel aid developed in-house, consisting of a wearable haptic feedback device. We designed an experiment in which participants donned the electronic travel aid and performed a virtual task while experiencing a simulation of three different visual impairments: age-related macular degeneration, diabetic retinopathy, and glaucoma. Our experiments indicate that our electronic travel aid significantly improves completion time for all three visual impairments and reduces the number of collisions for diabetic retinopathy and glaucoma. Overall, the combination of virtual reality and electronic travel aids may play a beneficial role in the mobility rehabilitation of persons with visual impairment by allowing early-phase testing of electronic travel aid prototypes in safe, realistic, and controllable settings.

4.
Trials ; 24(1): 169, 2023 Mar 07.
Article in English | MEDLINE | ID: mdl-36879333

ABSTRACT

BACKGROUND: Blindness and low vision (BLV) severely limit information about our three-dimensional world, leading to poor spatial cognition and impaired navigation. BLV engenders mobility losses, debility, illness, and premature mortality. These mobility losses have been associated with unemployment and severe compromises in quality of life. BLV not only eviscerates mobility and safety but also creates barriers to inclusive higher education. Although true in almost every high-income country, these startling facts are even more severe in low- and middle-income countries, such as Thailand. We aim to use VIS4ION (Visually Impaired Smart Service System for Spatial Intelligence and Onboard Navigation), an advanced wearable technology, to enable real-time access to microservices, providing a potential solution to close this gap and deliver consistent and reliable access to the critical spatial information needed for mobility and orientation during navigation. METHODS: We are leveraging 3D reconstruction and semantic segmentation techniques to create a digital twin of the campus that houses Mahidol University's disability college. Using a cross-over randomized design, two groups of BLV students will deploy this augmented platform in two phases: a passive phase, during which the wearable will only record location, and an active phase, in which end users receive orientation cueing during location recording. One group will perform the active phase first and then the passive phase; the other group will complete the phases in the reverse order. We will assess acceptability, appropriateness, and feasibility, focusing on experiences with VIS4ION. In addition, we will test another cohort of students for navigational, health, and well-being improvements, comparing weeks 1 to 4. We will also conduct a process evaluation according to the Saunders Framework.
Finally, we will extend our computer vision and digital twinning technique to a 12-block spatial grid in Bangkok, providing aid in a more complex environment. DISCUSSION: Although electronic navigation aids seem like an attractive solution, there are several barriers to their use; chief among them is their dependence on environmental (sensor-based) infrastructure, Wi-Fi/cell "connectivity" infrastructure, or both. These barriers limit their widespread adoption, particularly in low- and middle-income countries. Here we propose a navigation solution that operates independently of both environmental and Wi-Fi/cell infrastructure. We predict the proposed platform will support spatial cognition in BLV populations, augmenting personal freedom and agency and promoting health and well-being. TRIAL REGISTRATION: ClinicalTrials.gov identifier NCT03174314, registered 2017-06-02.


Subject(s)
Vision, Low , Humans , Quality of Life , Thailand , Universities , Intelligence , Randomized Controlled Trials as Topic
5.
Disabil Rehabil Assist Technol ; : 1-10, 2023 Mar 16.
Article in English | MEDLINE | ID: mdl-36927193

ABSTRACT

PURPOSE: Visual impairment-related disabilities have become increasingly pervasive. Current reports estimate a total of 36 million persons with blindness and 217 million persons with moderate to severe visual impairment worldwide. Assistive technologies (AT), including text-to-speech software, navigational/spatial guides, and object recognition tools, have the capacity to improve the lives of people with blindness and low vision. However, access to such AT is constrained by high costs and implementation barriers. More recently, expansive growth in mobile computing has enabled many technologies to be translated into mobile applications, and a marketplace of accessibility apps has become available; yet no framework exists to facilitate navigation of this voluminous space. MATERIALS AND METHODS: We developed the BLV (Blind and Low Vision) App Arcade: a fun, engaging, and searchable curated repository of assistive technology apps organized into 11 categories spanning a wide variety of themes, from entertainment to navigation. Additionally, a standardized evaluation metric was formalized to assess each app in five key dimensions: reputability, privacy, data sharing, effectiveness, and ease of use/accessibility. In this paper, we describe the methodological approaches, considerations, and metrics used to find, store, and score mobile applications. CONCLUSION: The development of a comprehensive and standardized database of apps with a scoring rubric has the potential to increase access to reputable tools for the visually impaired community, especially for those in low- and middle-income demographics, who may have access to mobile devices but otherwise have limited access to more expensive technologies or services.


A wide array of assistive mobile applications now serve as low-cost, convenient, and effective alternatives to standard tools in the rehabilitation domain. Given an extensive (and growing) marketplace of assistive apps, we highlight the importance of developing standardized evaluation frameworks that assess the merit, functionality, and accessibility of tools in their respective rehabilitation fields. We introduce a novel, publicly accessible resource that exhibits verified and reliable assistive apps for the visually impaired community, especially for those in low- and middle-income demographics who may not have access to common technologies and services.

7.
Sensors (Basel) ; 22(22)2022 Nov 17.
Article in English | MEDLINE | ID: mdl-36433501

ABSTRACT

Vision-based localization approaches now underpin newly emerging navigation pipelines for myriad use cases, from robotics to assistive technologies. Compared to sensor-based solutions, vision-based localization does not require pre-installed sensor infrastructure, which is costly, time-consuming, and/or often infeasible at scale. Herein, we propose a novel vision-based localization pipeline for a specific use case: navigation support for end users with blindness and low vision. Given a query image taken by an end user on a mobile application, the pipeline leverages a visual place recognition (VPR) algorithm to find similar images in a reference image database of the target space. The geolocations of these similar images are utilized in a downstream task that employs a weighted-average method to estimate the end user's location. Another downstream task utilizes the perspective-n-point (PnP) algorithm to estimate the end user's direction by exploiting the 2D-3D point correspondences between the query image and the 3D environment, as extracted from matched images in the database. Additionally, the system implements Dijkstra's algorithm to compute the shortest path on a navigable map that includes the trip origin and destination. The topometric map used for localization and navigation is built using a customized graphical user interface that projects a 3D reconstructed sparse map, built from a sequence of images, onto the corresponding a priori 2D floor plan. Sequential images used for map construction can be collected in a pre-mapping step or sourced from public databases/citizen science. The end-to-end system can be installed on any internet-accessible device with a camera that hosts the custom mobile application. For evaluation purposes, mapping and localization were tested in a complex hospital environment.
The evaluation results demonstrate that our system can achieve localization with an average error of less than 1 m without knowledge of the camera's intrinsic parameters, such as focal length.


Subject(s)
Robotics , Vision, Low , Humans , Algorithms , Robotics/methods , Databases, Factual , Blindness
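The weighted-average localization step described in entry 7 (estimating the user's position from the geolocations of the top VPR matches) can be sketched as follows. The similarity-score weighting shown here is an illustrative assumption, not the paper's exact formulation.

```python
def estimate_location(matches):
    """Estimate the user's 2-D position as a similarity-weighted average
    of the geolocations of the top visual place recognition matches.

    matches: list of (similarity_score, (x, y)) pairs
    """
    total = sum(score for score, _ in matches)
    x = sum(score * pos[0] for score, pos in matches) / total
    y = sum(score * pos[1] for score, pos in matches) / total
    return (x, y)

# Hypothetical top-3 reference images matched to the query,
# each with a similarity score and a known geolocation (meters)
top_matches = [(0.9, (10.0, 4.0)), (0.6, (11.0, 4.5)), (0.5, (10.5, 3.5))]
print(estimate_location(top_matches))
```

Higher-scoring matches pull the estimate toward their geolocation, so a single poor match in the database has limited influence on the final fix.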
8.
Sensors (Basel) ; 22(17)2022 Aug 30.
Article in English | MEDLINE | ID: mdl-36080991

ABSTRACT

Smart health applications have received significant attention in recent years. Novel applications hold significant promise to overcome many of the inconveniences faced by persons with disabilities throughout daily living. For people with blindness and low vision (BLV), environmental perception is compromised, creating myriad difficulties. Precise localization is still a gap in the field and is critical to safe navigation. Conventional GNSS positioning cannot provide satisfactory performance in urban canyons. 3D mapping-aided (3DMA) GNSS may serve as an urban GNSS solution, since the availability of 3D city models has widely increased. As a result, this study developed a real-time 3DMA GNSS positioning system based on state-of-the-art 3DMA GNSS algorithms. Shadow matching was integrated with likelihood-based ranging 3DMA GNSS, generating positioning hypothesis candidates. To increase robustness, the 3DMA GNSS solution was then optimized with Doppler measurements using factor graph optimization (FGO) in a loosely coupled fashion. This study also evaluated positioning performance using data recorded by an advanced wearable system in New York City. The real-time, forward-processed FGO can provide a root-mean-square error (RMSE) of about 21 m. The RMSE drops to 16 m when the data are post-processed with FGO in a combined forward/backward direction. Overall, the results show that the proposed loosely coupled 3DMA FGO algorithm provides better and more robust positioning performance for the multi-sensor integration approach used by this wearable for persons with BLV.


Subject(s)
Geographic Information Systems , Records , Blindness , Data Collection , Humans , Likelihood Functions , New York
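The RMSE figures reported in entry 8 (about 21 m real-time, 16 m post-processed) use the standard root-mean-square error metric over horizontal position fixes, which can be computed as below. The sample coordinates are made up for illustration.

```python
import math

def rmse(estimates, ground_truth):
    """Root-mean-square error between estimated and true 2-D positions.

    estimates, ground_truth: lists of (x, y) coordinates in meters.
    """
    squared_errors = [
        (ex - tx) ** 2 + (ey - ty) ** 2
        for (ex, ey), (tx, ty) in zip(estimates, ground_truth)
    ]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Two hypothetical position fixes versus surveyed ground truth (meters)
est = [(0.0, 3.0), (4.0, 0.0)]
truth = [(0.0, 0.0), (0.0, 0.0)]
print(rmse(est, truth))
```

Because errors are squared before averaging, occasional large multipath-induced outliers dominate the RMSE, which is why it is a common headline metric for urban GNSS evaluations.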
9.
Parkinsonism Relat Disord ; 84: 148-154, 2021 03.
Article in English | MEDLINE | ID: mdl-33526323

ABSTRACT

OBJECTIVE: To explore the potential rehabilitative effect of art therapy and its underlying mechanisms in Parkinson's disease (PD). METHODS: Observational study of eighteen patients with PD, followed in a prospective, open-label, exploratory trial. Before and after twenty sessions of art therapy, PD patients were assessed with the UPDRS, Pegboard Test, Timed Up and Go Test (TUG), Beck Depression Inventory (BDI), Modified Fatigue Impact Scale and PROMIS-Self-Efficacy, Montreal Cognitive Assessment, Rey-Osterrieth Complex Figure Test (RCFT), Benton Visual Recognition Test (BVRT), Navon Test, Visual Search, and Stop Signal Task. Eye movements were recorded during the BVRT. Resting-state functional MRI (rs-fMRI) was also performed to assess functional connectivity (FC) changes within the dorsal attention (DAN), executive control (ECN), fronto-occipital (FOC), salience (SAL), primary and secondary visual (V1, V2) brain networks. We also tested fourteen age-matched healthy controls at baseline. RESULTS: At baseline, PD patients showed abnormal visual-cognitive functions and eye movements. Analyses of rs-fMRI showed increased functional connectivity within DAN and ECN in patients compared to controls. Following art therapy, performance improved on Navon test, eye tracking, and UPDRS scores. Rs-fMRI analysis revealed significantly increased FC levels in brain regions within V1 and V2 networks. INTERPRETATION: Art therapy improves overall visual-cognitive skills and visual exploration strategies as well as general motor function in patients with PD. The changes in brain connectivity highlight a functional reorganization of visual networks.


Subject(s)
Art Therapy , Cognitive Dysfunction/physiopathology , Cognitive Dysfunction/rehabilitation , Connectome , Nerve Net/physiopathology , Neurological Rehabilitation , Parkinson Disease/physiopathology , Parkinson Disease/rehabilitation , Aged , Cognitive Dysfunction/diagnostic imaging , Cognitive Dysfunction/etiology , Eye-Tracking Technology , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Nerve Net/diagnostic imaging , Outcome Assessment, Health Care , Parkinson Disease/complications , Parkinson Disease/diagnostic imaging
10.
JAMA Neurol ; 78(2): 165-176, 2021 02 01.
Article in English | MEDLINE | ID: mdl-33136137

ABSTRACT

Importance: Accurate and up-to-date estimates on incidence, prevalence, mortality, and disability-adjusted life-years (burden) of neurological disorders are the backbone of evidence-based health care planning and resource allocation for these disorders. It appears that no such estimates have been reported at the state level for the US. Objective: To present burden estimates of major neurological disorders in the US states by age and sex from 1990 to 2017. Design, Setting, and Participants: This is a systematic analysis of the Global Burden of Disease (GBD) 2017 study. Data on incidence, prevalence, mortality, and disability-adjusted life-years (DALYs) of major neurological disorders were derived from the GBD 2017 study of the 48 contiguous US states, Alaska, and Hawaii. Fourteen major neurological disorders were analyzed: stroke, Alzheimer disease and other dementias, Parkinson disease, epilepsy, multiple sclerosis, motor neuron disease, migraine, tension-type headache, traumatic brain injury, spinal cord injuries, brain and other nervous system cancers, meningitis, encephalitis, and tetanus. Exposures: Any of the 14 listed neurological diseases. Main Outcome and Measure: Absolute numbers in detail by age and sex and age-standardized rates (with 95% uncertainty intervals) were calculated. Results: The 3 most burdensome neurological disorders in the US in terms of absolute number of DALYs were stroke (3.58 [95% uncertainty interval [UI], 3.25-3.92] million DALYs), Alzheimer disease and other dementias (2.55 [95% UI, 2.43-2.68] million DALYs), and migraine (2.40 [95% UI, 1.53-3.44] million DALYs). The burden of almost all neurological disorders (in terms of absolute number of incident, prevalent, and fatal cases, as well as DALYs) increased from 1990 to 2017, largely because of the aging of the population. 
Exceptions for this trend included traumatic brain injury incidence (-29.1% [95% UI, -32.4% to -25.8%]); spinal cord injury prevalence (-38.5% [95% UI, -43.1% to -34.0%]); meningitis prevalence (-44.8% [95% UI, -47.3% to -42.3%]), deaths (-64.4% [95% UI, -67.7% to -50.3%]), and DALYs (-66.9% [95% UI, -70.1% to -55.9%]); and encephalitis DALYs (-25.8% [95% UI, -30.7% to -5.8%]). The different metrics of age-standardized rates varied between the US states from a 1.2-fold difference for tension-type headache to 7.5-fold for tetanus; southeastern states and Arkansas had a relatively higher burden for stroke, while northern states had a relatively higher burden of multiple sclerosis and eastern states had higher rates of Parkinson disease, idiopathic epilepsy, migraine and tension-type headache, and meningitis, encephalitis, and tetanus. Conclusions and Relevance: There is a large and increasing burden of noncommunicable neurological disorders in the US, with up to a 5-fold variation in the burden of and trends in particular neurological disorders across the US states. The information reported in this article can be used by health care professionals and policy makers at the national and state levels to advance their health care planning and resource allocation to prevent and reduce the burden of neurological disorders.


Subject(s)
Cost of Illness , Disability-Adjusted Life Years/trends , Global Burden of Disease/trends , Global Health/trends , Nervous System Diseases/diagnosis , Nervous System Diseases/epidemiology , Humans , United States/epidemiology
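The age-standardized rates reported in the GBD analysis above are produced by direct standardization: each age group's crude rate is weighted by a standard population weight so that states with different age structures can be compared. The sketch below uses made-up counts and weights; the actual GBD standard population is not reproduced here.

```python
def age_standardized_rate(counts, populations, std_weights):
    """Directly age-standardized rate.

    counts: case counts per age group
    populations: person counts per age group
    std_weights: standard-population weights per age group (sum to 1)
    """
    assert abs(sum(std_weights) - 1.0) < 1e-9, "weights must sum to 1"
    crude_rates = [c / p for c, p in zip(counts, populations)]
    return sum(r * w for r, w in zip(crude_rates, std_weights))

# Hypothetical cases and population in three age bands (young, middle, old)
cases = [10, 50, 200]
pop = [100_000, 80_000, 40_000]
weights = [0.5, 0.3, 0.2]  # assumed standard-population weights
print(age_standardized_rate(cases, pop, weights) * 100_000)  # per 100,000
```

Without this weighting, a state with an older population would show a higher crude rate for age-related disorders such as stroke or dementia even if its age-specific risks were identical to another state's.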
13.
Cerebellum Ataxias ; 7(1): 14, 2020 Nov 13.
Article in English | MEDLINE | ID: mdl-33292609

ABSTRACT

BACKGROUND: Eye-hand coordination (EHC) is a sophisticated act that requires interconnected processes governing synchronization of ocular and manual motor systems. Precise, timely and skillful movements such as reaching for and grasping small objects depend on the acquisition of high-quality visual information about the environment and simultaneous eye and hand control. Multiple areas in the brainstem and cerebellum, as well as some frontal and parietal structures, have critical roles in the control of eye movements and their coordination with the head. Although both cortex and cerebellum contribute critical elements to normal eye-hand function, differences in these contributions suggest that there may be separable deficits following injury. METHOD: As a preliminary assessment for this perspective, we compared eye and hand-movement control in a patient with cortical stroke relative to a patient with cerebellar stroke. RESULT: We found the onset of eye and hand movements to be temporally decoupled, with significant decoupling variance in the patient with cerebellar stroke. In contrast, the patient with cortical stroke displayed increased hand spatial errors and less significant temporal decoupling variance. Increased decoupling variance in the patient with cerebellar stroke was primarily due to unstable timing of rapid eye movements, saccades. CONCLUSION: These findings highlight a perspective in which facets of eye-hand dyscoordination are dependent on lesion location and may or may not cooperate to varying degrees. Broadly speaking, the results corroborate the general notion that the cerebellum is instrumental to the process of temporal prediction for eye and hand movements, while the cortex is instrumental to the process of spatial prediction, both of which are critical aspects of functional movement control.
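The temporal decoupling analysis described above compares eye and hand movement onsets trial by trial. A simple stand-in for that computation, using per-trial onset asynchronies and their variance, is sketched below; the onset times are hypothetical and this is not the authors' exact analysis.

```python
from statistics import pvariance

def onset_decoupling(eye_onsets, hand_onsets):
    """Per-trial eye-hand onset asynchrony (hand minus eye, ms)
    and its population variance across trials."""
    deltas = [h - e for e, h in zip(eye_onsets, hand_onsets)]
    return deltas, pvariance(deltas)

# Hypothetical saccade and reach onset times (ms) over four trials
eye = [200.0, 210.0, 190.0, 205.0]
hand = [260.0, 300.0, 230.0, 290.0]
deltas, decoupling_variance = onset_decoupling(eye, hand)
print(deltas, decoupling_variance)
```

A larger variance indicates less stable eye-hand timing across trials, the pattern the abstract attributes primarily to unstable saccade timing in the cerebellar stroke patient.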

15.
Semin Neurol ; 39(6): 775-784, 2019 12.
Article in English | MEDLINE | ID: mdl-31847048

ABSTRACT

Accurate detection and interpretation of eye movement abnormalities often guides differential diagnosis, discussions on prognosis and disease mechanisms, and directed treatment of disabling visual symptoms and signs. A comprehensive clinical eye movement examination is high yield from a diagnostic standpoint; however, skillful recording and quantification of eye movements can increase detection of subclinical deficits, confirm clinical suspicions, guide therapeutics, and generate expansive research opportunities. This review encompasses an overview of the clinical eye movement examination, provides examples of practical diagnostic contributions from quantitative recordings of eye movements, and comments on recording equipment and related challenges.


Subject(s)
Eye Movement Measurements , Neurology/methods , Ocular Motility Disorders/diagnosis , Humans
16.
Prog Brain Res ; 249: 361-374, 2019.
Article in English | MEDLINE | ID: mdl-31325995

ABSTRACT

Within the domain of motor performance, eye-hand coordination centers on close relationships between visuo-perceptual, ocular, and appendicular motor systems. This coordination is critically dependent on a cycle of feedforward predictions and feedback-based corrective mechanisms. While intrinsic feedback harnesses naturally available movement-dependent sensory channels to modify movement errors, extrinsic feedback may be provided synthetically by a third party for further supplementation. Extrinsic feedback has been robustly explored in hand-focused motor control studies, such as through computer-based visual displays highlighting the spatial errors of reaches. Similar approaches have never been tested for spatial errors related to eye movements, despite the potential to alter ocular motor performance. Stroke creates motor planning deficits, resulting in the inability to generate predictions of motor performance. In this study involving visually guided pointing, we used an interactive computer display to provide extrinsic feedback of hand endpoint errors in an initial baseline experiment (pre-) and then feedback of both eye and hand errors in a second experiment (post-) to chronic stroke participants following each reach trial. We tested the hypothesis that extrinsic feedback of eye and hand would improve predictions and therefore feedforward control. We observed this improvement as gains in the spatial and temporal aspects of eye-hand coordination and a reduction in the decoupling noted as post-stroke incoordination in previous studies, returning performance toward healthy control behavior. More specifically, the results show that stroke participants, following the interventional feedback for eye and hand, improved both their accuracy and timing. This was evident through a temporal re-synchronization between eyes and hands, improving correlations between movement timing, as well as reducing the overall time interval (delay) between effectors.
These experiments provide a strong indication that an extrinsic feedback intervention at appropriate therapeutic doses may improve eye-hand coordination during stroke rehabilitation.


Subject(s)
Biofeedback, Psychology/physiology , Fixation, Ocular/physiology , Hand/physiopathology , Motor Activity/physiology , Psychomotor Performance/physiology , Stroke/physiopathology , Adult , Aged , Chronic Disease , Female , Humans , Male , Middle Aged , Pilot Projects , Stroke/therapy , Stroke Rehabilitation
17.
J Vis Exp ; (145)2019 03 21.
Article in English | MEDLINE | ID: mdl-30958457

ABSTRACT

The objective analysis of eye movements has a significant history and has been long proven to be an important research tool in the setting of brain injury. Quantitative recordings have a strong capacity to screen diagnostically. Concurrent examinations of the eye and upper limb movements directed toward shared functional goals (e.g., eye-hand coordination) serve as an additional robust biomarker-laden path to capture and interrogate neural injury, including acquired brain injury (ABI). While quantitative dual-effector recordings in 3-D afford ample opportunities within ocular-manual motor investigations in the setting of ABI, the feasibility of such dual recordings for both eye and hand is challenging in pathological settings, particularly when approached with research-grade rigor. Here we describe the integration of an eye tracking system with a motion tracking system intended primarily for limb control research to study a natural behavior. The protocol enables the investigation of unrestricted, three-dimensional (3D) eye-hand coordination tasks. More specifically, we review a method to assess eye-hand coordination in visually guided saccade-to-reach tasks in subjects with chronic middle cerebral artery (MCA) stroke and compare them to healthy controls. Special attention is paid to the specific eye- and limb-tracking system properties in order to obtain high fidelity data from participants post-injury. Sampling rate, accuracy, permissible head movement range given anticipated tolerance and the feasibility of use were several of the critical properties considered when selecting an eye tracker and an approach. The limb tracker was selected based on a similar rubric but included the need for 3-D recording, dynamic interaction and a miniaturized physical footprint. 
The quantitative data provided by this method and the overall approach when executed correctly has tremendous potential to further refine our mechanistic understanding of eye-hand control and help inform feasible diagnostic and pragmatic interventions within the neurological and rehabilitative practice.


Subject(s)
Ataxia/physiopathology , Eye/physiopathology , Hand/physiopathology , Psychomotor Performance , Female , Humans , Infarction, Middle Cerebral Artery/physiopathology , Male , Middle Aged , Saccades
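The dual-effector protocol in entry 17 integrates an eye tracker and a limb tracker that sample at different rates, so their streams must be aligned on a common timeline before eye-hand coordination can be analyzed. One common approach, linear interpolation of one stream onto the other's timestamps, is sketched below; the sampling rates and values are illustrative, not taken from the protocol.

```python
def resample(timestamps, values, query_times):
    """Linearly interpolate a recording stream onto another stream's
    timestamps so eye and limb samples can be compared sample-by-sample.
    Assumes timestamps are sorted ascending; values outside the recorded
    range are clamped to the endpoints."""
    out = []
    for t in query_times:
        if t <= timestamps[0]:
            out.append(values[0])
        elif t >= timestamps[-1]:
            out.append(values[-1])
        else:
            # Find the first recorded sample at or after the query time
            i = next(k for k in range(1, len(timestamps)) if timestamps[k] >= t)
            t0, t1 = timestamps[i - 1], timestamps[i]
            v0, v1 = values[i - 1], values[i]
            out.append(v0 + (v1 - v0) * (t - t0) / (t1 - t0))
    return out

# Hypothetical limb-tracker samples (100 Hz) queried at eye-tracker times
limb_t = [0.00, 0.01, 0.02]   # seconds
limb_x = [0.0, 1.0, 3.0]      # hand position (cm)
eye_t = [0.000, 0.004, 0.008, 0.012]
print(resample(limb_t, limb_x, eye_t))
```

In practice, hardware trigger pulses or a shared clock are still needed to remove constant offsets between devices; interpolation only handles the differing sample grids.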